Recent research has shown that criminal networks have complex organizational structures, but whether these structures can be used to predict static and dynamic properties of criminal networks remains largely unexplored. Here, by combining graph representation learning and machine learning methods, we show that the structural properties of political corruption, police intelligence, and money laundering networks can be used to recover missing criminal partnerships, distinguish among different types of criminal and legal associations, and predict the total amount of money exchanged among criminal agents, all with outstanding accuracy. We also show that our approach can anticipate future criminal associations during the dynamic growth of corruption networks with high accuracy. Thus, similar to evidence found at crime scenes, we conclude that the structural patterns of criminal networks carry important information about illegal activities, which allows machine learning methods to predict missing information and even anticipate future criminal behavior.
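To make the recipe concrete, here is a minimal, hypothetical sketch of structural link prediction on a toy random graph. It uses generic networkx neighborhood features and a logistic-regression classifier as stand-ins, not the graph-representation-learning pipeline of the paper:

```python
# Hypothetical sketch: recover 'missing' edges of a network from structural
# features. The graph, features, and classifier are illustrative stand-ins.
import networkx as nx
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
G = nx.erdos_renyi_graph(60, 0.08, seed=0)   # toy stand-in for a criminal network

def edge_features(G, u, v):
    # Classic structural link-prediction features for the pair (u, v).
    cn = len(list(nx.common_neighbors(G, u, v)))
    jac = next(nx.jaccard_coefficient(G, [(u, v)]))[2]
    aa = next(nx.adamic_adar_index(G, [(u, v)]))[2]
    return [cn, jac, aa]

# Positives: held-out true edges (the 'missing partnerships').
# Negatives: an equal number of sampled non-edges.
edges = list(G.edges())
held_out = edges[: len(edges) // 5]
G_train = G.copy()
G_train.remove_edges_from(held_out)
held = set(map(frozenset, held_out))
candidates = [e for e in nx.non_edges(G_train) if frozenset(e) not in held]
neg_idx = rng.choice(len(candidates), size=len(held_out), replace=False)
negatives = [candidates[i] for i in neg_idx]

X = [edge_features(G_train, u, v) for u, v in held_out + negatives]
y = [1] * len(held_out) + [0] * len(negatives)
clf = LogisticRegression().fit(X, y)
print("training accuracy:", clf.score(X, y))
```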
Code generation from text requires understanding the user's intent from a natural language description (NLD) and generating an executable program code snippet that satisfies this intent. While recent pretrained language models (PLMs) demonstrate remarkable performance on this task, they fail when the given NLD is ambiguous, lacking sufficient specification to generate a high-quality code snippet. In this work, we introduce a novel and more realistic setup for this task. We hypothesize that ambiguities in the specification of an NLD can be resolved by asking clarification questions (CQs). Therefore, we collect and introduce a new dataset named CodeClarQA containing NLD-Code pairs with created CQAs. We evaluate the performance of PLMs for code generation on our dataset. The empirical results support our hypothesis that clarifications lead to more precisely generated code, as shown by an improvement of 17.52 in BLEU, 12.72 in CodeBLEU, and 7.7\% in exact match. Alongside this, our task and dataset introduce new challenges to the community, including when and which CQs should be asked.
Neural machine translation (NMT) has become the de facto standard in real-world machine translation applications. However, NMT models can unpredictably produce severely pathological translations, known as hallucinations, that seriously undermine user trust. It thus becomes crucial to implement effective preventive strategies to guarantee their proper functioning. In this paper, we address the problem of hallucination detection in NMT by following a simple intuition: as hallucinations are detached from the source content, they exhibit encoder-decoder attention patterns that are statistically different from those of good-quality translations. We frame this problem with an optimal transport formulation and propose a fully unsupervised, plug-in detector that can be used with any attention-based NMT model. Experimental results show that our detector not only outperforms all previous model-based detectors, but is also competitive with detectors that employ large models trained on millions of samples.
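As a rough illustration of this intuition (not the paper's exact formulation), one can summarize how much attention each source token receives, and measure the transport cost between that distribution and a reference distribution estimated from good-quality translations. Everything below is toy data:

```python
# Illustrative sketch: score a translation by the 1-Wasserstein distance
# between its source-attention mass and a reference mass distribution.
import numpy as np
from scipy.stats import wasserstein_distance

def source_attention_mass(attn):
    # attn: (target_len, source_len) encoder-decoder attention weights.
    mass = attn.sum(axis=0)      # total attention each source token receives
    return mass / mass.sum()     # normalize into a probability distribution

rng = np.random.default_rng(0)
clean = rng.dirichlet(np.ones(12), size=200)   # stand-in for good translations
reference = clean.mean(axis=0)

def hallucination_score(attn, reference):
    mass = source_attention_mass(attn)
    positions = np.arange(len(mass))
    return wasserstein_distance(positions, positions, mass, reference)

suspect = np.zeros((8, 12))
suspect[:, 0] = 1.0                            # attention collapsed on one token
print(hallucination_score(suspect, reference)) # large score => likely hallucination
```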
Current abstractive summarization systems exhibit important weaknesses, such as the omission of relevant information and the generation of factual inconsistencies (also known as hallucinations), which prevent their deployment in real-world applications. At the same time, automatic evaluation metrics such as CTC scores have recently been proposed that exhibit a higher correlation with human judgments than traditional lexical-overlap metrics such as ROUGE. In this work, we intend to close the loop by leveraging these recent advances in summarization metrics to create quality-aware abstractive summarizers. Namely, we propose an energy-based model that learns to re-rank summaries according to one, or a combination, of these metrics. We experiment with several metrics to train our energy-based re-ranker and show that it consistently improves the scores achieved by the predicted summaries. Nonetheless, human evaluation results show that the re-ranking approach should be used with care for highly abstractive summaries, as the available metrics are not yet sufficiently reliable for this purpose.
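A minimal sketch of the underlying idea, assuming a linear energy function trained with a pairwise hinge loss on synthetic features and metric scores (the paper's model and CTC-style metrics would replace these stand-ins):

```python
# Illustrative sketch: learn an energy E(x) = w . x so that summaries with
# higher metric scores get lower energy, then re-rank by picking the
# lowest-energy candidate. All features and scores are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_pairs, dim = 500, 8
feats = rng.normal(size=(n_pairs, 2, dim))  # two candidate summaries per source
true_w = rng.normal(size=dim)
metric = feats @ true_w                     # stand-in for metric scores

w = np.zeros(dim)
for _ in range(50):                         # pairwise hinge-loss training
    for (a, b), (ma, mb) in zip(feats, metric):
        good, bad = (a, b) if ma > mb else (b, a)
        if good @ w > bad @ w - 1.0:        # margin violated
            w -= 0.01 * (good - bad)

def rerank(candidates):
    # Return the candidate with the lowest energy.
    return min(candidates, key=lambda x: x @ w)

best = rerank(rng.normal(size=(5, dim)))
```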
We present the joint contribution of IST and Unbabel to the WMT 2022 Shared Task on Quality Estimation (QE). Our team participated in all three subtasks: (i) sentence- and word-level quality prediction; (ii) explainable QE; and (iii) critical error detection. For all tasks, we build on top of the COMET framework, connecting it with the predictor-estimator architecture of OpenKiwi and equipping it with a word-level sequence tagger and an explanation extractor. Our results suggest that incorporating references during pretraining improves performance on downstream tasks for several language pairs, and that jointly training with sentence- and word-level objectives improves it further. Moreover, combining attention and gradient information proved to be the top strategy for extracting good explanations from sentence-level QE models. Overall, our submissions achieved the best results in all three tasks for almost all language pairs.
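As an illustration of the attention-gradient combination (toy tensors rather than actual COMET/OpenKiwi internals), one can weight each token's attention by the magnitude of the score gradient at its embedding:

```python
# Illustrative sketch: combine attention weights with gradient magnitudes to
# get word-level relevance scores that 'explain' a sentence-level QE score.
import numpy as np

rng = np.random.default_rng(0)
seq_len, hidden = 10, 16
attention = rng.dirichlet(np.ones(seq_len))   # attention over input tokens
grads = rng.normal(size=(seq_len, hidden))    # d(QE score) / d(token embedding)

grad_norm = np.linalg.norm(grads, axis=1)     # per-token gradient magnitude
relevance = attention * grad_norm             # combine the two signals
relevance /= relevance.sum()                  # normalize to token-level weights
print(np.argsort(relevance)[::-1][:3])        # most 'explanatory' token indices
```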
Getting the most out of limited resources allows advances in natural language processing (NLP) research and practice while being conservative with those resources, which may be data, time, storage, or energy. Recent work in NLP has yielded interesting results from scaling; however, using scale alone to improve results means that resource consumption also scales. This relationship motivates research into efficient methods that require fewer resources to achieve similar results. This survey synthesizes methods and findings on efficiency in NLP, aiming both to guide new researchers in the field and to inspire the development of new methods.
Advances in scientific machine learning are improving modern computational science and engineering applications. Data-driven methods such as dynamic mode decomposition (DMD) can extract coherent structures from the spatio-temporal data generated by dynamical systems and infer different regimes of those systems. The spatio-temporal data comes as snapshots, each containing spatial information for a single time instant. In modern engineering applications, generating high-dimensional snapshots can be time- and/or resource-demanding. In this study, we consider two strategies for enhancing the DMD workflow in large numerical simulations: (i) snapshot compression to relieve disk pressure; (ii) the use of in situ visualization images to reconstruct the dynamics (or part of them) at runtime. We evaluate our approaches with two 3D fluid dynamics simulations, using DMD to reconstruct the solutions. Results show that snapshot compression considerably reduces the required disk space: we observed that lossy compression reduces storage by almost $50\%$ with low relative errors in the signal reconstructions and other quantities of interest. We also extend our analysis to data generated on the fly, using in situ visualization tools to produce image files of our state vectors at runtime. In large simulations, the generation of snapshots may be slow enough to use batch algorithms for inference. Streaming DMD takes advantage of the incremental SVD algorithm, updating the modes as each new snapshot arrives. We use streaming DMD to reconstruct the dynamics from the in situ generated images, and we show that this process is efficient and that the reconstructed dynamics are accurate.
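For context, here is a minimal sketch of exact DMD on a synthetic snapshot matrix, the kind of computation this workflow performs; the truncation rank, sizes, and data are illustrative:

```python
# Illustrative sketch: exact DMD from a matrix of snapshots. Columns of X
# are state snapshots x_0 ... x_{m-1}; DMD fits a linear map x_{k+1} = A x_k
# and extracts its dominant modes and eigenvalues.
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 200, 50, 5                        # state size, snapshots, truncation rank
X = rng.normal(size=(n, m))                 # synthetic stand-in for simulation data
X1, X2 = X[:, :-1], X[:, 1:]                # snapshot pairs (x_k, x_{k+1})

U, s, Vh = np.linalg.svd(X1, full_matrices=False)
Ur, Sr, Vr = U[:, :r], np.diag(s[:r]), Vh[:r].conj().T

Atilde = Ur.conj().T @ X2 @ Vr @ np.linalg.inv(Sr)   # reduced-order operator
eigvals, W = np.linalg.eig(Atilde)                   # discrete-time DMD eigenvalues
modes = X2 @ Vr @ np.linalg.inv(Sr) @ W              # exact DMD modes
print(eigvals)
```

A streaming variant would replace the one-shot SVD above with an incremental SVD update applied as each new snapshot arrives.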
Although the problem of hallucinations in neural machine translation (NMT) has received some attention, research on this highly pathological phenomenon lacks a solid foundation. Previous work has been limited in several ways: it often resorts to artificial settings where the problem is amplified, it disregards some (common) types of hallucinations, and it does not validate the adequacy of its detection heuristics. In this paper, we set the ground for research on NMT hallucinations. First, we work in a natural setting, i.e., in-domain data without artificial noise in either training or inference. Next, we annotate a dataset of over 3.4k sentences, indicating different kinds of critical errors and hallucinations. Then, we turn to detection methods, both revisiting methods used in previous work and proposing the use of glass-box uncertainty-based detectors. Overall, we show that for preventive settings, (i) previously used methods are largely inadequate, and (ii) sequence log-probability works best and performs on par with reference-based methods. Finally, we propose DeHallucinator, a simple method for alleviating hallucinations at test time that considerably reduces the hallucinatory rate. To ease future research, we release our annotated dataset for WMT18 German-English, together with the model, training data, and code.
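A minimal sketch of the sequence log-probability heuristic described above (the threshold and token scores here are purely illustrative; in practice the threshold would be tuned on annotated data):

```python
# Illustrative sketch: flag a translation as a possible hallucination when
# the model's own length-normalized log-probability is unusually low.
import numpy as np

def seq_logprob(token_logprobs):
    # token_logprobs: log p(y_t | y_<t, x) for each generated target token.
    return float(np.mean(token_logprobs))      # length-normalized

def flag_hallucination(token_logprobs, threshold=-2.0):
    return seq_logprob(token_logprobs) < threshold

print(flag_hallucination([-0.1, -0.3, -0.2]))  # confident output -> False
print(flag_hallucination([-3.5, -4.1, -2.9]))  # low-confidence output -> True
```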
Semi-parametric models, which augment generation with retrieval, have led to impressive results in language modeling and machine translation, due to their ability to retrieve fine-grained information from a datastore of examples. One of the most prominent approaches, $k$NN-MT, exhibits strong domain adaptation capabilities by retrieving tokens from domain-specific datastores \citep{khandelwal2020nearest}. However, $k$NN-MT requires an expensive retrieval operation for every single generated token, leading to a very low decoding speed (around 8 times slower than a parametric model). In this paper, we introduce a \textit{chunk-based} $k$NN-MT model which retrieves chunks of tokens from the datastore, instead of a single token. We propose several strategies for incorporating the retrieved chunks into the generation process, and for selecting the steps at which the model needs to search for neighbors in the datastore. Experiments on machine translation in two settings, static and ``on-the-fly'' domain adaptation, show that the chunk-based $k$NN-MT model leads to significant speed-ups (up to 4 times) with only a small drop in translation quality.
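As a rough sketch of the retrieval side of this idea (brute-force nearest-neighbor search over toy vectors; the full model would also interpolate the retrieved chunk with the parametric distribution):

```python
# Illustrative sketch: a datastore maps decoder hidden states to the chunk
# of tokens that followed them in the reference corpus; at selected steps
# we retrieve a whole chunk instead of issuing one query per token.
import numpy as np

rng = np.random.default_rng(0)
d, n, chunk_len = 64, 10_000, 4
keys = rng.normal(size=(n, d)).astype(np.float32)        # stored decoder states
chunks = rng.integers(0, 32_000, size=(n, chunk_len))    # token IDs that followed

def retrieve_chunk(query, k=8):
    # Brute-force k-NN; a library such as faiss would be used at scale.
    dists = np.linalg.norm(keys - query, axis=1)
    neighbors = np.argsort(dists)[:k]
    return chunks[neighbors[0]], neighbors   # nearest chunk + candidate set

query = rng.normal(size=d).astype(np.float32)
chunk, _ = retrieve_chunk(query)
print(chunk)   # next chunk_len candidate tokens for the generation process
```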
Modern machine learning models are opaque, and as a result there is a burgeoning academic subfield on methods that explain these models' behavior. However, what is the precise goal of providing such explanations, and how can we demonstrate that explanations achieve this goal? Some research argues that explanations should help teach a student (either human or machine) to simulate the model being explained, and that the quality of explanations can be measured by the simulation accuracy of students on unexplained examples. In this work, leveraging meta-learning techniques, we extend this idea to improve the quality of the explanations themselves, specifically by optimizing explanations such that student models more effectively learn to simulate the original model. We train models on three natural language processing and computer vision tasks, and find that students trained with explanations extracted with our framework are able to simulate the teacher significantly more effectively than students trained with explanations produced by previous methods. Through human annotations and a user study, we further find that these learned explanations more closely align with how humans would explain the required decisions in these tasks. Our code is available at https://github.com/coderpat/learning-scaffold
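A minimal sketch of the simulatability protocol this framework optimizes (generic scikit-learn models as stand-ins; the 'explanation' here is a simple top-k feature mask rather than a learned one):

```python
# Illustrative sketch: a student learns to imitate a teacher while seeing an
# explanation (a top-k feature mask), then is scored by how often it matches
# the teacher on held-out, unexplained examples ('simulation accuracy').
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_te, y_tr, _ = train_test_split(X, y, random_state=0)

teacher = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
top_k = np.argsort(teacher.feature_importances_)[-5:]   # the 'explanation'

mask = np.zeros(X.shape[1])
mask[top_k] = 1.0
student = LogisticRegression().fit(X_tr * mask, teacher.predict(X_tr))

# Simulation accuracy: agreement with the teacher on unexplained test data.
print((student.predict(X_te) == teacher.predict(X_te)).mean())
```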